- A core component present in many successful neural network architectures is an MLP block of two fully connected layers with a non-linear activation in between. An intriguing phenomenon observed empirically, including in transformer architectures, is that, after training, the activations in the hidden layer of this MLP block tend to be extremely sparse on any given input. Unlike traditional forms of sparsity, where there are neurons or weights that can be deleted from the network, this form of dynamic activation sparsity appears to be harder to exploit to obtain more efficient networks. Motivated by this, we initiate a formal study of PAC learnability of MLP layers that exhibit activation sparsity. We present a variety of results showing that such classes of functions do lead to provable computational and statistical advantages over their non-sparse counterparts. Our hope is that a better theoretical understanding of sparsely activated networks would lead to methods that can exploit activation sparsity in practice.
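As a concrete picture of the object being studied, here is a minimal numpy sketch of a two-layer MLP block together with a measurement of its dynamic activation sparsity on one input. The widths, random weights, and the ReLU non-linearity are illustrative assumptions; the extreme sparsity described above is a property of trained networks, whereas a randomly initialized ReLU block typically shows only about half of its hidden units inactive.

```python
# Minimal sketch (illustrative, not from the paper): a two-layer MLP block
# y = W2 * relu(W1 x + b1) + b2, and the fraction of zero hidden activations.
import numpy as np

rng = np.random.default_rng(0)
d, h = 64, 256                                        # input width, hidden width (assumed)
W1, b1 = rng.standard_normal((h, d)) / np.sqrt(d), rng.standard_normal(h)
W2, b2 = rng.standard_normal((d, h)) / np.sqrt(h), rng.standard_normal(d)

def mlp_block(x):
    """Two fully connected layers with a ReLU non-linearity in between."""
    a = np.maximum(W1 @ x + b1, 0.0)                  # hidden activations
    sparsity = np.mean(a == 0.0)                      # fraction of inactive hidden units
    return W2 @ a + b2, sparsity

x = rng.standard_normal(d)
_, s = mlp_block(x)
print(f"fraction of zero hidden activations on this input: {s:.2f}")
```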
- We study the differentially private (DP) empirical risk minimization (ERM) problem under the semi-sensitive DP setting where only some features are sensitive. This generalizes the Label DP setting where only the label is sensitive. We give improved upper and lower bounds on the excess risk for DP-ERM. In particular, we show that the error only scales polylogarithmically in terms of the sensitive domain size, improving upon previous results that scale polynomially in the sensitive domain size (Ghazi et al., 2021).
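To make the semi-sensitive setting concrete, the sketch below applies standard k-ary randomized response only to the sensitive part of each record, leaving the non-sensitive features untouched. This is an illustrative mechanism for "only some features are sensitive," not the paper's DP-ERM algorithm, and the record layout, domain size, and privacy budget are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): eps-DP k-ary randomized
# response applied only to the sensitive feature of a record.
import numpy as np

def randomized_response(value, domain_size, eps, rng):
    """eps-DP k-ary randomized response over {0, ..., domain_size - 1}."""
    p_true = np.exp(eps) / (np.exp(eps) + domain_size - 1)
    if rng.random() < p_true:
        return value
    other = rng.integers(domain_size - 1)             # uniform over the other values
    return other if other < value else other + 1

rng = np.random.default_rng(0)
eps, k = 1.0, 8                                       # privacy budget, sensitive domain size (assumed)
record = {"public_features": [0.3, -1.2], "sensitive_feature": 5}
private_record = dict(record, sensitive_feature=randomized_response(
    record["sensitive_feature"], k, eps, rng))
print(private_record)
```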
- To advance understanding of the multihazard performance of midrise cold-formed steel (CFS) construction, a unique multidisciplinary experimental program was conducted on the Large High-Performance Outdoor Shake Table (LHPOST) at the University of California, San Diego (UCSD). The centerpiece of this project involved earthquake and live fire testing of a full-scale 6-story CFS wall braced building. Initially, the building was subjected to seven earthquake tests of increasing motion intensity, sequentially targeting service, design, and maximum credible earthquake (MCE) demands. Subsequently, live fire tests were conducted on the earthquake-damaged building at two select floors. Finally, for the first time, the test building was subjected to two postfire earthquake tests, including a low-amplitude aftershock and an extreme near-fault target MCE-scaled motion. In addition, low-amplitude white noise and ambient vibration data were collected during construction and seismic testing phases to support identification of the dynamic state of the building system. This paper offers an overview of this unique multihazard test program and presents the system-level structural responses and physical damage features of the test building throughout the earthquake-fire-earthquake test phases, whereas the component-level seismic behavior of the shear walls and seismic design implications of CFS-framed building systems are discussed in a companion paper.
- Separations: We introduce a monotone variant of Xor-Sat and show it has exponential monotone circuit complexity. Since Xor-Sat is in NC^2, this improves qualitatively on the monotone vs. non-monotone separation of Tardos (1988). We also show that monotone span programs over R can be exponentially more powerful than over finite fields. These results can be interpreted as separating subclasses of TFNP in communication complexity. Characterizations: We show that the communication (resp. query) analogue of PPA (subclass of TFNP) captures span programs over F_2 (resp. Nullstellensatz degree over F_2). Previously, it was known that communication FP captures formulas (Karchmer-Wigderson, 1988) and that communication PLS captures circuits (Razborov, 1995).
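For readers unfamiliar with span programs over F_2, the toy sketch below shows the standard acceptance criterion: an input is accepted exactly when the target vector lies in the F_2-span of the rows whose labels it satisfies. The matrix, labels, and the OR example are invented for illustration; this demonstrates the model, not any construction from the paper.

```python
# Toy sketch (illustrative): evaluating a monotone span program over F_2.
import numpy as np

def in_span_f2(rows, target):
    """Check whether target is in the F_2-linear span of the given rows."""
    basis = []
    for r in rows:
        r = r.copy()
        for b in basis:
            if r[np.argmax(b)]:            # cancel the leading 1 of each basis vector
                r ^= b
        if r.any():
            basis.append(r)
    t = target.copy()
    for b in basis:
        if t[np.argmax(b)]:
            t ^= b
    return not t.any()

# Toy monotone span program computing OR(x0, x1): both rows equal the target.
M = np.array([[1], [1]], dtype=np.uint8)   # one row per variable
labels = [0, 1]                            # row i is labelled by variable labels[i]
t = np.array([1], dtype=np.uint8)

def accepts(x):
    available = [M[i] for i in range(len(labels)) if x[labels[i]] == 1]
    return in_span_f2(available, t)

print(accepts([1, 0]), accepts([0, 0]))    # True False
```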
- We introduce a new technique for reducing the dimension of the ambient space of low-degree polynomials in the Gaussian space while preserving their relative correlation structure. As an application, we obtain an explicit upper bound on the dimension of an epsilon-optimal noise-stable Gaussian partition. In fact, we address the more general problem of upper bounding the number of samples needed to epsilon-approximate any joint distribution that can be non-interactively simulated from a correlated Gaussian source. Our results significantly improve (from Ackermann-like to "merely" exponential) the upper bounds recently proved on the above problems by De, Mossel & Neeman [CCC 2017, SODA 2018 resp.] and imply decidability of the larger alphabet case of the gap non-interactive simulation problem posed by Ghazi, Kamath & Sudan [FOCS 2016]. Our technique of dimension reduction for low-degree polynomials is simple, can be seen as a generalization of the Johnson-Lindenstrauss lemma, and could be of independent interest.
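For context, here is a sketch of the classical Johnson-Lindenstrauss random projection that the abstract says its technique generalizes. The point set, dimensions, and Gaussian projection matrix are arbitrary choices, and this is the standard JL construction rather than the paper's dimension reduction for low-degree polynomials.

```python
# Sketch of the classical Johnson-Lindenstrauss random projection (standard
# construction, not the paper's technique): pairwise distances are preserved
# up to small multiplicative distortion.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5_000, 500                   # points, ambient dimension, target dimension (assumed)
X = rng.standard_normal((n, d))

A = rng.standard_normal((k, d)) / np.sqrt(k)   # random Gaussian projection
Y = X @ A.T

def pdist(Z):
    """All pairwise Euclidean distances."""
    sq = np.sum(Z**2, axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0))

orig, proj = pdist(X), pdist(Y)
mask = ~np.eye(n, dtype=bool)
print("max relative distortion:", np.max(np.abs(proj[mask] / orig[mask] - 1)))
```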
- Unlike compressive sensing, where the measurement outputs are assumed to be real-valued and have infinite precision, in one-bit compressive sensing, measurements are quantized to one bit, their signs. In this work, our contributions are as follows: 1. We show how to recover the support of sparse high-dimensional vectors in the 1-bit compressive sensing framework with an asymptotically near-optimal number of measurements. We do this by showing an equivalence between the task of support recovery using 1-bit compressive sensing and a well-studied combinatorial object known as Union Free Families. 2. We also improve the bounds on the number of measurements for approximately recovering vectors from 1-bit compressive sensing measurements. All our results are about universal measurements, namely measurement schemes that work simultaneously for all sparse vectors. Our improved bounds naturally suggest several interesting open problems.
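The sketch below illustrates the measurement model only: one-bit measurements are the signs of random linear measurements of a sparse vector, and a naive correlation score is used to guess the support. The Gaussian measurement matrix and the correlation heuristic are illustrative assumptions, not the union-free-family construction or the bounds from the paper.

```python
# Minimal sketch (illustrative): one-bit measurements y = sign(Ax) of a
# k-sparse vector, with a naive correlation-based support estimate.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 1_000, 5                    # dimension, measurements, sparsity (assumed)
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

A = rng.standard_normal((m, n))            # Gaussian measurement matrix
y = np.sign(A @ x)                         # one-bit measurements: only the signs survive

scores = np.abs(A.T @ y)                   # correlate each column with the sign pattern
estimate = np.sort(np.argsort(scores)[-k:])
print("true support:     ", np.sort(support))
print("estimated support:", estimate)
```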
- We prove that the P^NP-type query complexity (alternatively, decision list width) of any boolean function f is quadratically related to the P^NP-type communication complexity of a lifted version of f. As an application, we show that a certain "product" lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture P^NP communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014).
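Since the result is phrased in terms of decision list width, here is a toy evaluator for a decision list together with its width; the rule encoding and the example list are made up for illustration and are unrelated to the lifting argument itself.

```python
# Toy illustration (not from the paper): a decision list is a sequence of
# (conjunction, output) rules; its width is the largest conjunction size.
from typing import List, Tuple

# Each literal is (variable index, required value); a rule fires when all its
# literals are satisfied.  An empty conjunction serves as the default rule.
Rule = Tuple[List[Tuple[int, bool]], bool]

def evaluate(decision_list: List[Rule], x: List[bool]) -> bool:
    for conjunction, output in decision_list:
        if all(x[i] == v for i, v in conjunction):
            return output
    raise ValueError("decision list has no default rule")

def width(decision_list: List[Rule]) -> int:
    return max(len(conj) for conj, _ in decision_list)

# Example: "if x0 and not x2 then 1; else if x1 then 0; else 1".
dl: List[Rule] = [([(0, True), (2, False)], True), ([(1, True)], False), ([], True)]
print(evaluate(dl, [True, True, False]), width(dl))   # True 2
```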